social cue
Would you let a humanoid play storytelling with your child? A usability study on LLM-powered narrative Human-Robot Interaction
Lombardi, Maria, Calabrese, Carmela, Ghiglino, Davide, Foglino, Caterina, De Tommaso, Davide, Da Lisca, Giulia, Natale, Lorenzo, Wykowska, Agnieszka
A key challenge in human-robot interaction research lies in developing robotic systems that can effectively perceive and interpret social cues, facilitating natural and adaptive interactions. In this work, we present a novel framework for enhancing the attention of the iCub humanoid robot by integrating advanced perceptual abilities to recognise social cues, understand surroundings through generative models such as ChatGPT, and respond with contextually appropriate social behaviour. Specifically, we propose an interaction task implementing a narrative protocol (storytelling task) in which the human and the robot create a short imaginary story together, taking turns exchanging cubes with creative images on them. To validate the protocol and the framework, experiments were performed to quantify the degree of usability and the quality of experience perceived by participants interacting with the system. Such a system can be beneficial in promoting effective human-robot collaboration, especially in assistance, education and rehabilitation scenarios, where social awareness and robot responsiveness play a pivotal role.
- Research Report > Experimental Study (1.00)
- Questionnaire & Opinion Survey (1.00)
- Research Report > New Finding (0.93)
- Health & Medicine > Therapeutic Area > Neurology (1.00)
- Health & Medicine > Therapeutic Area > Psychiatry/Psychology (0.68)
Theory of Mind and Self-Disclosure to CUIs
Self-disclosure is important to help us feel better, yet is often difficult. This difficulty can arise from how we think people will react to our self-disclosure. In this workshop paper, we briefly discuss self-disclosure to conversational user interfaces (CUIs) in relation to various social cues. We then discuss how expressions of uncertainty or representations of a CUI's reasoning could help encourage self-disclosure by making a CUI's intended "theory of mind" more transparent to users.
- North America > United States > New York > New York County > New York City (0.05)
- North America > Canada > Ontario > Waterloo Region > Waterloo (0.05)
- Europe > Denmark > North Jutland > Aalborg (0.05)
- (3 more...)
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Human Computer Interaction (0.91)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.46)
LLMs are Introvert
Zhang, Litian, Zhang, Xiaoming, Yan, Bingyu, Zhou, Ziyi, Zhang, Bo, Guan, Zhenyu, Zhang, Xi, Li, Chaozhuo
The exponential growth of social media and generative AI has transformed information dissemination, fostering connectivity but also accelerating the spread of misinformation. Understanding information propagation dynamics and developing effective control strategies is essential to mitigate harmful content. Traditional models, such as SIR, provide basic insights but inadequately capture the complexities of online interactions. Advanced methods, including attention mechanisms and graph neural networks, enhance accuracy but typically overlook user psychology and behavioral dynamics. Large language models (LLMs), with their human-like reasoning, offer new potential for simulating psychological aspects of information spread. We introduce an LLM-based simulation environment capturing agents' evolving attitudes, emotions, and responses. Initial experiments, however, revealed significant gaps between LLM-generated behaviors and authentic human dynamics, especially in stance detection and psychological realism. A detailed evaluation through Social Information Processing Theory identified major discrepancies in goal-setting and feedback evaluation, stemming from the lack of emotional processing in standard LLM training. To address these issues, we propose the Social Information Processing-based Chain of Thought (SIP-CoT) mechanism enhanced by emotion-guided memory. This method improves the interpretation of social cues, personalization of goals, and evaluation of feedback. Experimental results confirm that SIP-CoT-enhanced LLM agents more effectively process social information, demonstrating behaviors, attitudes, and emotions closer to real human interactions. In summary, this research highlights critical limitations in current LLM-based propagation simulations and demonstrates how integrating SIP-CoT and emotional memory significantly enhances the social intelligence and realism of LLM agents.
- Health & Medicine > Therapeutic Area > Psychiatry/Psychology (0.68)
- Leisure & Entertainment (0.66)
- Media > News (0.48)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Agents > Agent Societies (0.70)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.48)
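The SIR model cited above as the traditional propagation baseline can be made concrete. The sketch below is a minimal discrete-time SIR simulation for illustration only; the parameter values (`beta`, `gamma`) and step size are invented, not taken from the paper:

```python
def simulate_sir(beta=0.3, gamma=0.1, s0=0.99, i0=0.01, steps=200):
    """Discrete-time SIR dynamics on population fractions:
    dS = -beta*S*I, dI = beta*S*I - gamma*I, dR = gamma*I."""
    s, i, r = s0, i0, 0.0
    history = []
    for _ in range(steps):
        new_inf = beta * s * i   # susceptibles infected this step
        new_rec = gamma * i      # infected recovering this step
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        history.append((s, i, r))
    return history

hist = simulate_sir()
peak_i = max(i for _, i, _ in hist)
print(f"peak infected fraction: {peak_i:.2f}")
```

The three compartments always sum to one, which is a quick sanity check on any implementation; the abstract's point is that this aggregate view has no notion of individual attitudes or emotions, which is the gap the LLM-based simulation targets.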
Lemurs use smell, social cues, and superior memories to find treats
While elephants have a reputation as animals that never forget, they may have some competition from certain primates. Lemurs use their long-term memory in combination with smell and social cues to find hidden fruit. This technique may have deep evolutionary roots, according to a study published in the International Journal of Primatology. "Our study provides evidence that lemurs can integrate sensory information with ecological and social knowledge, which demonstrates their ability to consider multiple aspects of a problem," study co-author and New York University anthropologist Elena Cunningham said in a statement. Cunningham is a clinical professor of molecular pathobiology at NYU College of Dentistry.
- North America > United States > New York (0.26)
- Africa > Madagascar (0.07)
Social Cue Detection and Analysis Using Transfer Entropy
Jiang, Haoyang, Croft, Elizabeth A., Burke, Michael G.
Robots that work close to humans need to understand and use social cues to act in a socially acceptable manner. Social cues are a form of communication (i.e., information flow) between people. In this paper, a framework is introduced to detect and analyse a class of perceptible social cues that are nonverbal and episodic, and the related information transfer using an information-theoretic measure, namely, transfer entropy. We use a group-joining setting to demonstrate the practicality of transfer entropy for analysing communications between humans. Then we demonstrate the framework in two settings involving social interactions between humans: object-handover and person-following. Our results show that transfer entropy can identify information flows between agents and when and where they occur. Potential applications of the framework include information flow or social cue analysis for interactive robot design and socially-aware robot planning.
- Asia > Japan > Honshū > Kantō > Tokyo Metropolis Prefecture > Tokyo (0.14)
- North America > United States > New York > New York County > New York City (0.05)
- Oceania > Australia > Victoria > Melbourne (0.04)
- (14 more...)
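Transfer entropy, the measure this framework builds on, quantifies directed information flow as TE(X→Y) = Σ p(y_{t+1}, y_t, x_t) · log₂[ p(y_{t+1} | y_t, x_t) / p(y_{t+1} | y_t) ]. The sketch below is an illustrative plug-in estimator on synthetic binary series, not the authors' implementation:

```python
from collections import Counter
from math import log2
import random

def transfer_entropy(x, y):
    """Plug-in estimate of TE(X->Y) in bits for two discrete time series,
    using empirical probabilities of (y_{t+1}, y_t, x_t) triples."""
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))   # (y_{t+1}, y_t, x_t)
    pairs_yx = Counter(zip(y[:-1], x[:-1]))         # (y_t, x_t)
    pairs_yy = Counter(zip(y[1:], y[:-1]))          # (y_{t+1}, y_t)
    singles_y = Counter(y[:-1])                     # y_t
    n = len(x) - 1
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_joint = c / n
        p_y1_given_y0x0 = c / pairs_yx[(y0, x0)]
        p_y1_given_y0 = pairs_yy[(y1, y0)] / singles_y[y0]
        te += p_joint * log2(p_y1_given_y0x0 / p_y1_given_y0)
    return te

random.seed(0)
x = [random.randint(0, 1) for _ in range(5000)]
y = [0] + x[:-1]                  # y copies x with a one-step lag
print(transfer_entropy(x, y))     # near 1 bit: x predicts y's next state
print(transfer_entropy(y, x))     # near 0 bits: no flow in the reverse direction
```

The asymmetry between the two directions is exactly what lets the framework say not just that agents communicate, but who is informing whom and when.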
A Bayesian Framework for Cross-Situational Word-Learning
For infants, early word learning is a chicken-and-egg problem. One way to learn a word is to observe that it co-occurs with a particular referent across different situations. Another way is to use the social context of an utterance to infer the intended referent of a word. Here we present a Bayesian model of cross-situational word learning, and an extension of this model that also learns which social cues are relevant to determining reference. We test our model on a small corpus of mother-infant interaction and find it performs better than competing models. Finally, we show that our model accounts for experimental phenomena including mutual exclusivity, fast-mapping, and generalization from social cues.
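The cross-situational intuition — infer a word's referent from what it co-occurs with across scenes — can be illustrated with a toy count-based learner. This is a drastic simplification of the Bayesian model in the abstract (no social cues, just Dirichlet-smoothed co-occurrence), and the utterances and scenes below are invented:

```python
from collections import defaultdict

def learn_lexicon(situations, alpha=1.0):
    """Toy cross-situational learner: for each word, a posterior over
    referents proportional to (co-occurrence count + alpha)."""
    counts = defaultdict(lambda: defaultdict(float))
    referents = set()
    for words, objects in situations:
        referents.update(objects)
        for w in words:
            for o in objects:
                counts[w][o] += 1.0
    lexicon = {}
    for w, obj_counts in counts.items():
        total = sum(obj_counts[o] + alpha for o in referents)
        lexicon[w] = {o: (obj_counts[o] + alpha) / total for o in referents}
    return lexicon

# Each situation pairs an utterance with the objects present in the scene.
situations = [
    (["look", "ball"], {"BALL", "DOG"}),
    (["nice", "ball"], {"BALL", "CUP"}),
    (["look", "dog"], {"DOG", "CUP"}),
    (["dog", "ball"], {"DOG", "BALL"}),
]
lex = learn_lexicon(situations)
print(max(lex["ball"], key=lex["ball"].get))  # BALL co-occurs with "ball" in every use
```

No single scene disambiguates "ball", but aggregating across scenes does; the paper's contribution is doing this inference jointly with learning which social cues (e.g. gaze) further constrain the referent.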
The Robot in the Room: Influence of Robot Facial Expressions and Gaze on Human-Human-Robot Collaboration
Fu, Di, Abawi, Fares, Wermter, Stefan
Robot facial expressions and gaze are important factors for enhancing human-robot interaction (HRI), but their effects on human collaboration and perception are not well understood, for instance, in collaborative game scenarios. In this study, we designed a collaborative triadic HRI game scenario in which two participants worked together to insert objects into a shape sorter. One participant assumed the role of a guide, instructing the other participant, who played the role of an actor and placed occluded objects into the sorter. A humanoid robot issued instructions, observed the interaction, and displayed social cues to elicit changes in the two participants' behavior. We measured human collaboration as a function of task completion time, and measured the participants' perception of the robot by having them rate its behavior as intelligent or random. Participants also evaluated the robot by filling out the Godspeed questionnaire. We found that human collaboration was higher when the robot displayed a happy facial expression at the beginning of the game compared to a neutral facial expression. We also found that participants perceived the robot as more intelligent when it displayed a positive facial expression at the end of the game. The robot's behavior was also perceived as intelligent when it directed its gaze toward the guide, rather than the actor, at the beginning of the interaction. These findings provide insights into how robot facial expressions and gaze influence human behavior and perception in collaboration.
The Bystander Affect Detection (BAD) Dataset for Failure Detection in HRI
Bremers, Alexandra, Parreira, Maria Teresa, Fang, Xuanyu, Friedman, Natalie, Ramirez-Aristizabal, Adolfo, Pabst, Alexandria, Spasojevic, Mirjana, Kuniavsky, Michael, Ju, Wendy
For a robot to repair its own error, it must first know it has made a mistake. One way that people detect errors is from the implicit reactions of bystanders -- their confusion, smirks, or giggles clue us in that something unexpected occurred. To enable robots to detect and act on bystander responses to task failures, we developed a novel method to elicit bystander responses to human and robot errors. Using 46 different stimulus videos featuring a variety of human and machine task failures, we collected a total of 2452 webcam videos of human reactions from 54 participants. To test the viability of the collected data, we used the bystander reaction dataset as input to a deep-learning model, BADNet, to predict failure occurrence. We tested different data labeling methods and learned how they affect model performance, achieving precisions above 90%. We discuss strategies to model bystander reactions and predict failure, and how this approach can be used in real-world robotic deployments to detect errors and improve robot performance. As part of this work, we also contribute the "Bystander Affect Detection" (BAD) dataset of bystander reactions, supporting the development of better prediction models.
- North America > United States > New York > New York County > New York City (0.05)
- North America > Mexico (0.04)
- Europe > Portugal (0.04)
- (3 more...)
- Research Report (1.00)
- Questionnaire & Opinion Survey (0.68)
Using Social Cues to Recognize Task Failures for HRI: A Review of Current Research and Future Directions
Bremers, Alexandra, Pabst, Alexandria, Parreira, Maria Teresa, Ju, Wendy
Robots that carry out tasks and interact in complex environments will inevitably commit errors. Error detection is thus an important ability for robots to master, to work in an efficient and productive way. People leverage social cues from others around them to recognize and repair their own mistakes. With advances in computing and AI, it is increasingly possible for robots to achieve a similar error detection capability. In this work, we review current literature on how social cues can be used to recognize task failures for human-robot interaction (HRI). This literature review unites insights from behavioral science, human-robot interaction, and machine learning, to focus on three areas: 1) social cues for error detection (from behavioral science), 2) recognizing task failures in robots (from HRI), and 3) approaches for autonomous detection of HRI task failures based on social cues (from machine learning). We propose a taxonomy of error detection based on self-awareness and social feedback. Finally, we offer recommendations for HRI researchers and practitioners interested in developing robots that detect (physical) task errors using social cues from bystanders.
- North America > United States > New York > New York County > New York City (0.06)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- North America > United States > Washington > King County > Seattle (0.04)
- (15 more...)
- Overview (1.00)
- Research Report > New Finding (0.46)
Robot that learns social cues could feed people with tetraplegia
Robots that watch for social cues could feed people by gauging when they are ready for a mouthful. This may make it easier for people who can't feed themselves, such as those with tetraplegia, to socialise. People who can't control their legs or arms can use commercial robotic arms to help them eat.